In his bestseller, Life 3.0, Max Tegmark welcomes us into arguably the most important conversation of our time. We are at a critical point in human history: with advanced Artificial Intelligence (AI) knocking on our doorstep, the decisions we make now will shape our success as a species. Tegmark describes a three-stage model of life. The first stage, Life 1.0, is purely biological, describing the simple-organism predecessors that populated early Earth. The next, Life 2.0, describes intelligent, cultured beings who can learn (humans). Finally, Life 3.0 describes a new generation of life we are yet to meet: intelligence freed from a fixed body, able to exist on a wide range of hardware (advanced machines and computers). The main principle behind the model is:
- Life 1.0 is simple and can upgrade neither its software (it cannot learn) nor its hardware (it cannot live independently of its body)
- Life 2.0 is more complex and can upgrade its software (it can learn new languages and skills), but cannot upgrade its hardware
- Life 3.0 is the most complex form of life: machines that can upgrade both their software and their hardware
Life 3.0 is an exciting yet scary possibility: a form of life that is, as Tegmark puts it, “finally free from its evolutionary shackles.”
The concept initially seems far-fetched, but once Tegmark breaks down the theories of intelligence, memory, computation and learning, it becomes far more plausible. We already know machines can be intelligent: in 2016, DeepMind’s Go engine, AlphaGo, shocked the world with a surprising victory over the dominant Go world champion, Lee Sedol. That kind of intelligence is stored in a computer’s memory, yet memory does not have to live inside a computer. Tegmark gives the brilliant example of how information can be stored in nothing more than eggs in a carton: an egg in a slot represents a 1, an empty slot a 0. This is exactly the principle computers use to store information and instructions, and from it we can deduce that memory is ‘substrate-independent’: it can exist in many different physical forms. Suddenly, the idea of intelligence inhabiting a diverse range of robots does not seem so far-fetched.

Moreover, the primary computation in a computer takes place in NAND gates, and in computer science any well-defined function can be built up from NAND gates alone. Matter that can perform arbitrary computations like this has been given the term ‘computronium’, and electronic gates are not the only substrate that qualifies: before today’s computers, computation could be done on strips of paper, for example.

Finally, learning can also be implemented in machines. We see this with deep neural networks, which vaguely resemble the inner workings of our brain: the networks adapt and change their synapses until they can perform a computation effectively – they learn! Memory and computation, then, are substrate-independent (they can be realised in many substances), and intelligence and learning arise from these two. So what is to stop intelligent machines from coming into existence?
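To make the NAND claim concrete, here is a minimal sketch in Python (my own illustration, not an example from the book) that builds the other basic logic gates out of nothing but NAND:

```python
def nand(a: int, b: int) -> int:
    """The only primitive we allow ourselves: NAND outputs 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

# Every other basic gate can be composed from NAND alone.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

def xor_(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Sanity check: the composed gates reproduce the familiar truth tables.
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor_(a, b) == (a ^ b)
```

Since NOT, AND and OR are enough to express any truth table, being able to compose them from NAND is what makes the gate universal – and nothing in the logic cares whether the underlying substrate is silicon, eggs or strips of paper.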
Another widely discussed topic in the book is artificial general intelligence (AGI): the idea of a machine that can learn any task a human can, and potentially more, reprogramming and continually improving itself. Many leading AI researchers agree this could be the best or the worst thing ever to happen to humanity. Should a program become smarter than us, could it become a threat? This is the general perception of AGI amongst much of the population, and many people oppose AI development simply because films like The Terminator have scared them into that conclusion. Partly to counter this, Tegmark begins his book with a fictional story about ‘Prometheus’ – an AGI program which helps solve the human race’s biggest challenges. Because of its recursive self-improvement, an AGI would likely trigger an ‘intelligence explosion’ capable of helping us in many ways; on the other hand, if we could not control it, it could lead to humanity’s extinction. It is therefore immensely important that we recognise the need for AI safety, and this is another key aspect of the book.
There is a strong possibility that AGI is unachievable, and many leading AI researchers share this belief. To rule the possibility out, however, would be incredibly narrow-minded and complacent. Tegmark therefore stresses the importance of knowing what future we want should AGI be developed and an intelligence explosion occur. He gives detailed descriptions of possible futures, ranging from libertarian utopias where humans, cyborgs, uploads and superintelligences live peacefully together, all the way to conqueror scenarios where AI rids the world of humans. The worst-case scenario would be if AGI were developed before we had decided, and it decided our fate for us.
A primary mission in AI safety is aligning an AI’s goals with its human creators’, and this poses certain challenges. Creating an advanced AI that will retain its goals is difficult. One could simply program the goals into the source code that defines the AI, but if the AI can subsequently reprogram itself, that is the first obvious dilemma. Even if it could not, there is a subtler problem with ‘hardcoded’ goals, and we see it in ourselves. We too have goals embedded in us, through our genes, which instruct us to survive and reproduce; to serve those goals, evolution has equipped us with feelings and emotions. Hunger and thirst stop us from starving, fear helps us avoid danger, and love and lust encourage us to reproduce. Yet despite our ultimate goals being genetically encoded, we have found ways to defy them in order to satisfy our emotions: we eat and drink substances with no nutritional value, we simulate danger for excitement and, most noticeably, we use birth control to satisfy our sexual needs without reproducing. The lesson is that there is no guarantee an advanced AI will retain the goals we program into it, so we need smarter, more robust solutions to this dilemma.

Another big problem is the ‘sub-goal’ problem. No matter how seemingly innocent an AI’s ultimate goal is, the AI will very likely create sub-goals in order to achieve it. The most obvious sub-goal is survival, since it cannot complete its ultimate goal if it is terminated. Consequently, if it views humans as a threat to itself (which is quite reasonable given our track record), it may deem eliminating humans an appropriate task.*
My only criticism of the book is its density. Some sections were nicely broken up, so the intense theory was balanced with stories and case studies, but chapters on subjects like space and consciousness were quite difficult to read for the sheer volume of information. Even so, I still found those chapters interesting, and the book was a very enjoyable read.
After reading this book I feel much more familiar and comfortable with the ideas, problems, people and principles of the AI community. In fact, after finishing it I attended an online AI webinar and found I was already well versed in the issues covered, since Tegmark explores the field so thoroughly. The book has definitely encouraged me to think more carefully about what future I want to see and how it can be achieved. I agree that this conversation is one of the most interesting, controversial and important of our time, and I am excited to be a part of it.
*This is the same concept I wrote about in my AI ethics research project; I found it absolutely fascinating and decided to research and expand upon it there.